Artificial intelligence and moral enhancement

Artificial intelligence and moral enhancement concerns the application of artificial intelligence to enhance moral reasoning and to accelerate moral progress.

Artificial moral reasoning

With respect to moral reasoning, some consider humans to be suboptimal information processors, moral judges, and moral agents.[1] Under stress or time constraints, people often fail to consider all the factors and information needed to make well-reasoned moral judgments; people are also inconsistent and prone to biases.

With the rise of artificial intelligence, artificial moral agents could perform and enhance moral reasoning, overcoming these human limitations.

Ideal observer theory

The classical ideal observer theory is a metaethical theory about the meaning of moral statements. It holds that a moral statement is any statement to which an "ideal observer" would react or respond in a certain way. An ideal observer is defined as being: (1) omniscient with respect to non-ethical facts, (2) omnipercipient, (3) disinterested, (4) dispassionate, (5) consistent, and (6) normal in all other respects.

Adam Smith and David Hume espoused versions of the ideal observer theory and Roderick Firth provided a more sophisticated and modern version.[2] An analogous idea in law is the reasonable person criterion.

Today, artificial intelligence systems are capable of providing or assisting in moral decisions, stating what we morally ought to do if we want to comply with certain moral principles.[1] Artificial intelligence systems can gather information from their environments, process it using operational criteria (e.g., moral criteria such as values, goals, and principles), and advise users on the morally best courses of action.[3] Such systems could enable humans to make (nearly) optimal moral choices that they usually do not or cannot make because of a lack of the necessary mental resources or because of time constraints.

Artificial moral advisors can be compared and contrasted with ideal observers.[1] Ideal observers must be omniscient and omnipercipient with respect to non-ethical facts, whereas artificial moral advisors would only need to know the morally relevant facts that pertain to a given decision.

Users can instruct these systems with varying configurations and settings, which allows the systems to be relativist. Relativist artificial moral advisors would equip humans to be better moral judges and would respect their autonomy as both moral judges and moral agents.[1] For these reasons, and because artificial moral advisors would be disinterested, dispassionate, consistent, relational, dispositional, empirical, and objectivist, relativist artificial moral advisors could be preferable to absolutist ideal observers.[1]
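
A minimal sketch of how such an advisor might work, assuming a simple weighted-scoring rule and hypothetical criterion names that are not drawn from the cited literature: candidate actions are scored against user-supplied moral criteria, and because each user supplies their own weights, the same advisor gives different users different, relativist advice.

# Hypothetical sketch of an artificial moral advisor: it scores candidate
# actions against user-supplied moral criteria (values, goals, principles)
# and advises the highest-scoring course of action. All names, weights,
# and the scoring rule are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class Action:
    name: str
    satisfies: dict[str, float]  # degree (0.0 to 1.0) to which each criterion is met

def advise(actions: list[Action], weights: dict[str, float]) -> Action:
    """Return the action with the highest weighted score under the user's criteria."""
    def score(action: Action) -> float:
        return sum(w * action.satisfies.get(c, 0.0) for c, w in weights.items())
    return max(actions, key=score)

actions = [
    Action("tell the full truth", {"honesty": 1.0, "avoid_harm": 0.4}),
    Action("tell a white lie", {"honesty": 0.1, "avoid_harm": 0.9}),
]

# Two users with different moral commitments receive different advice
# from the same advisor, which is what makes it relativist.
print(advise(actions, {"honesty": 0.8, "avoid_harm": 0.2}).name)  # tell the full truth
print(advise(actions, {"honesty": 0.2, "avoid_harm": 0.8}).name)  # tell a white lie

A real advisor would need far richer inputs, explanations, and justifications; the sketch only illustrates the separation between the system's scoring machinery and the user-supplied moral commitments.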

Exhaustive versus auxiliary enhancement

Exhaustive enhancement involves scenarios where human moral decision-making is supplanted, left entirely to machines. Some proponents consider machines to be morally superior to humans and hold that simply doing as the machines say would constitute moral improvement.[4]

Opponents of exhaustive enhancement list five main concerns:[5] (1) the existence of pluralism may complicate finding consensuses on which to build, configure, train, or inform such systems; (2) even if such consensuses could be achieved, people might still fail to construct good systems because of human or nonhuman limitations; (3) the resultant systems might not be able to make autonomous moral decisions; (4) moral progress might be hindered; and (5) it would mean the death of morality.

Opponents further argue that dependence on artificial intelligence systems to perform moral reasoning would not only neglect the cultivation of moral excellence but actively undermine it, exposing people to risks of disengagement, of atrophy of human faculties, and of moral manipulation at the hands of the systems or their creators.[4]

Auxiliary enhancement addresses these concerns; it involves scenarios where machines augment or supplement human decision-making. Artificial intelligence assistants would be tools that help people clarify and keep track of their moral commitments and contexts, while providing accompanying explanations, arguments, and justifications for their conclusions. The ultimate decision-making, however, would rest with the human users.[4]

Some proponents of auxiliary enhancement also support educational technologies with respect to morality, i.e., technologies that teach moral reasoning, such as assistants that utilize the Socratic method.[5] On this view, the "right" or "best" answer to a moral question may be a "best" dialogue that provides value for users.
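
A minimal sketch of what such a Socratic assistant might look like, assuming a hypothetical bank of question templates and a simple question-and-answer loop rather than any particular published system: the assistant returns a dialogue transcript instead of a verdict.

# Hypothetical sketch of a Socratic moral assistant: rather than issuing a
# verdict, it responds to a user's stated moral claim with probing questions
# intended to surface definitions, consequences, counterexamples, and reasons.
# The question templates and the session structure are illustrative assumptions.

SOCRATIC_PROMPTS = [
    "What do you mean by '{claim}'?",
    "What would follow if everyone acted on '{claim}'?",
    "Can you think of a case where '{claim}' leads to a result you would reject?",
    "What evidence or principle supports '{claim}'?",
]

def socratic_session(claim: str, answers: list[str]) -> list[tuple[str, str]]:
    """Pair each probing question with the user's answer, returning the
    dialogue itself rather than a single 'right' answer."""
    return [(prompt.format(claim=claim), answer)
            for prompt, answer in zip(SOCRATIC_PROMPTS, answers)]

# Example session on a user's moral claim.
claim = "lying is always wrong"
answers = [
    "Asserting something one believes to be false.",
    "Trust would be preserved, but some harms could not be averted.",
    "Lying to protect someone from an aggressor.",
    "A duty of honesty toward others.",
]
for question, answer in socratic_session(claim, answers):
    print("Assistant:", question)
    print("User:", answer)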

Pluralism

Artificial moral agents could be made configurable so as to match the moral commitments of their users. This would preserve the existing pluralism in societies.[3]

Beyond matching their users' moral commitments, artificial moral agents could emulate historical or contemporary philosophers and could adopt and utilize points of view, schools of thought, or wisdom traditions.[4] Responses produced by teams of multiple artificial moral agents could result from debate or from other processes for combining their individual outputs.
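
One simple way a team of such agents could combine their outputs is a majority vote over their individual recommendations; the following sketch assumes purely hypothetical perspectives, recommendations, and voting rule, and is not taken from the cited literature.

# Hypothetical sketch of combining several artificial moral agents, each
# configured to emulate a different point of view, via a simple majority vote.
# The perspectives, recommendations, and voting rule are illustrative assumptions;
# a real system might instead use structured debate or weighted aggregation.

from collections import Counter

# Each emulated perspective independently recommends one candidate action.
recommendations = {
    "consequentialist agent": "donate the bonus",
    "virtue-ethics agent": "donate the bonus",
    "contractualist agent": "split the bonus with colleagues",
}

def combine_by_vote(recs: dict[str, str]) -> str:
    """Return the action recommended by the largest number of agents."""
    return Counter(recs.values()).most_common(1)[0][0]

print(combine_by_vote(recommendations))  # -> donate the bonus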

References

  1. Giubilini, Alberto; Savulescu, Julian (2018). "The Artificial Moral Advisor: The "Ideal Observer" Meets Artificial Intelligence". Philosophy & Technology. 31: 169–188. Retrieved 2023-07-01.
  2. Firth, Roderick (March 1952). "Ethical Absolutism and the Ideal Observer". Philosophy and Phenomenological Research. 12 (3): 317–345. JSTOR 2103988.
  3. Savulescu, Julian; Maslen, Hannah (2015). "Moral Enhancement and Artificial Intelligence: Moral AI?". In Romportl, Jan; Zackova, Eva; Kelemen, Jozef (eds.). Beyond Artificial Intelligence: The Disappearing Human-machine Divide. pp. 79–95.
  4. Volkman, Richard; Gabriels, Katleen (2023). "AI Moral Enhancement: Upgrading the Socio-Technical System of Moral Engagement". Science and Engineering Ethics. 29 (2). Retrieved 2023-07-01.
  5. Lara, Francisco; Deckers, Jan (2020). "Artificial Intelligence as a Socratic Assistant for Moral Enhancement". Neuroethics. 13 (3): 275–287. Retrieved 2023-07-01.